Robust Variable Selection Based on Relaxed Lad Lasso
Authors
Abstract
Least absolute deviation is proposed as a robust estimator to address the case in which the error has an asymmetric heavy-tailed distribution or contains outliers. In order to be insensitive to these situations and to select the truly important variables from a large number of predictors in linear regression, this paper introduces a two-stage variable selection method named relaxed LAD lasso, which enables the model to obtain sparse solutions in the presence of outliers or heavy-tailed errors by combining least absolute deviation with the relaxed lasso. Compared with the lasso, the relaxed LAD lasso is not only immune to the rapid growth of noise variables but also maintains a better convergence rate, O_p(n^{-1/2}). In addition, we prove that the relaxed LAD lasso has the property of consistency for large samples; that is, it selects the true model with probability tending to one. Through simulation and empirical results, we further verify its outstanding performance in terms of prediction accuracy and correct selection of informative variables under a heavy-tailed error distribution.
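A minimal sketch of the two-stage idea described above, assuming scikit-learn's QuantileRegressor (quantile = 0.5 gives the least-absolute-deviation loss with an L1 penalty). The function name relaxed_lad_lasso, the penalty alpha, and the relaxation factor phi are illustrative choices, not the authors' implementation.

import numpy as np
from sklearn.linear_model import QuantileRegressor

def relaxed_lad_lasso(X, y, alpha=0.1, phi=0.2):
    # Stage 1: LAD-lasso on all predictors (median regression + L1 penalty)
    # screens out most noise variables.
    stage1 = QuantileRegressor(quantile=0.5, alpha=alpha, solver="highs").fit(X, y)
    active = np.flatnonzero(np.abs(stage1.coef_) > 1e-8)
    coef = np.zeros(X.shape[1])
    if active.size == 0:
        return coef, stage1.intercept_, active
    # Stage 2: refit LAD on the selected predictors only, with the relaxed
    # penalty phi * alpha, reducing the shrinkage bias of the stage-1 fit.
    stage2 = QuantileRegressor(quantile=0.5, alpha=phi * alpha, solver="highs").fit(X[:, active], y)
    coef[active] = stage2.coef_
    return coef, stage2.intercept_, active

# Toy usage: heavy-tailed (t-distributed) errors with several noise predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_t(df=2, size=200)
coef, intercept, active = relaxed_lad_lasso(X, y, alpha=0.05, phi=0.2)
print(active, np.round(coef, 2))

Setting phi = 1 recovers an ordinary LAD-lasso fit, while phi close to 0 approaches an unpenalized LAD refit on the selected variables.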
Similar Resources
Robust Regression Shrinkage and Consistent Variable Selection Through the LAD-Lasso
The least absolute deviation (LAD) regression is a useful method for robust regression, and the least absolute shrinkage and selection operator (lasso) is a popular choice for shrinkage estimation and variable selection. In this article we combine these two classical ideas together to produce LAD-lasso. Compared with the LAD regression, LAD-lasso can do parameter estimation and variable selecti...
Relaxed Lasso
The Lasso is an attractive regularisation method for high dimensional regression. It combines variable selection with an efficient computational procedure. However, the rate of convergence of the Lasso is slow for some sparse high dimensional data, where the number of predictor variables is growing fast with the number of observations. Moreover, many noise variables are selected if the estimato...
Thresholded Lasso for High Dimensional Variable Selection
Given n noisy samples with p dimensions, where n ≪ p, we show that the multi-step thresholding procedure based on the Lasso – we call it the Thresholded Lasso – can accurately estimate a sparse vector β ∈ ℝ^p in a linear model Y = Xβ + ε, where X_{n×p} is a design matrix normalized to have column ℓ2-norm √n, and ε ∼ N(0, σ²I_n). We show that under the restricted eigenvalue (RE) condition (Bickel-Ritov-T...
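A minimal sketch of a multi-step thresholding procedure in this spirit, assuming scikit-learn's Lasso and LinearRegression; the threshold tau and the penalty alpha are illustrative values, not the tuning rules from the cited work.

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def thresholded_lasso(X, y, alpha=0.1, tau=0.05):
    # Step 1: an initial Lasso fit over all p predictors.
    beta_init = Lasso(alpha=alpha).fit(X, y).coef_
    # Step 2: threshold, keeping only coefficients with magnitude above tau.
    support = np.flatnonzero(np.abs(beta_init) > tau)
    beta = np.zeros(X.shape[1])
    if support.size:
        # Step 3: refit ordinary least squares on the retained support.
        beta[support] = LinearRegression().fit(X[:, support], y).coef_
    return beta, support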
Regularizing Lasso: a Consistent Variable Selection Method
Table 1 provides the average computational time (in minutes) for the eight methods under the simulation settings. SIS clearly requires the least computational effort, whereas RLASSO as well as Scout require much longer computational time. But all methods except RLASSO(CLIME) can be computed under a reasonable amount of time for p = 5000 and n = 100. RLASSO(CLIME) takes much longer because of in...
Adaptive Robust Variable Selection.
Heavy-tailed high-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. A natural procedure to address this problem is to use penalized quantile regression with weighted L1-penalty, called weighted robust Lasso (WR-Lasso), in which weights are introduced to ameliorate the bias problem induced by the L1-penalty. In the ul...
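A minimal sketch of a weighted-L1 penalized quantile regression in the spirit of WR-Lasso, assuming scikit-learn's QuantileRegressor. The column-rescaling trick below turns a uniform L1 penalty into the weighted penalty sum_j w_j |beta_j|; the weight vector is a user-supplied assumption here, not the data-driven weights studied in the cited work.

import numpy as np
from sklearn.linear_model import QuantileRegressor

def weighted_robust_lasso(X, y, weights, alpha=0.1, quantile=0.5):
    # Rescale column j by 1 / w_j so that a uniform L1 penalty on the rescaled
    # coefficients equals the weighted penalty sum_j w_j * |beta_j|.
    w = np.asarray(weights, dtype=float)
    fit = QuantileRegressor(quantile=quantile, alpha=alpha, solver="highs").fit(X / w, y)
    # Undo the rescaling to recover coefficients on the original scale.
    return fit.coef_ / w, fit.intercept_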
Journal
Journal title: Symmetry
Year: 2022
ISSN: 0865-4824, 2226-1877
DOI: https://doi.org/10.3390/sym14102161